Senior Data Engineer – Data Warehouse (Python | SQL | dbt | Snowflake/Databricks)

Saven Technologies

5 - 10 years

Hyderabad

Posted: 04/01/2026

Job Description

Location: Hyderabad, India (Remote / Hybrid)

Experience: 5–10 years

Employment Type: Full-time

Compensation: Competitive (top of market for strong candidates)


About the Role

We are hiring Senior Data Engineers / Analytics Engineers to build and operate modern cloud data warehouse platforms for US hedge funds and investment firms.

This is a hands-on backend data engineering role focused on Python, SQL, dbt, and cloud data warehousing, not a dashboard or reporting-only position.

If you enjoy fixing broken data pipelines, cleaning messy raw data, optimizing warehouse queries, and building reliable dbt models, this role is for you.

What You Will Do
  • Build, maintain, and optimize ETL / ELT data pipelines using Python, SQL, and dbt
  • Ingest raw data from CSV, Excel, APIs, and S3 / ADLS, and normalize it into clean warehouse models
  • Design staging, intermediate, and mart layers using dbt
  • Debug, repair, and backfill production data pipelines
  • Optimize Snowflake / Databricks / Redshift / Synapse query performance
  • Implement data quality checks, validations, and monitoring
  • Handle schema drift, late-arriving data, and vendor inconsistencies
  • Work closely with analysts, quants, and US-based stakeholders
  • Own reliability, correctness, and scalability of data systems
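
The ingestion and normalization work described above can be sketched in a few lines of Python. This is a minimal illustration, not the team's actual pipeline code; the feed format, column names, and types are all hypothetical:

```python
import csv
import io

# Hypothetical raw vendor feed: messy headers, stray whitespace, mixed casing.
RAW_CSV = """Trade Date , Ticker,  Qty
2026-01-02, AAPL , 100
2026-01-03,MSFT,250
"""

def normalize_header(name: str) -> str:
    """Map a messy vendor column name to a snake_case warehouse column."""
    return name.strip().lower().replace(" ", "_")

def ingest(raw: str) -> list[dict]:
    """Parse raw CSV into clean, typed rows ready for a staging table."""
    reader = csv.DictReader(io.StringIO(raw))
    # Accessing fieldnames reads the header row; we then overwrite it
    # with normalized names so every downstream row uses clean keys.
    reader.fieldnames = [normalize_header(h) for h in reader.fieldnames]
    rows = []
    for row in reader:
        rows.append({
            "trade_date": row["trade_date"].strip(),
            "ticker": row["ticker"].strip().upper(),
            "qty": int(row["qty"]),
        })
    return rows

rows = ingest(RAW_CSV)
print(rows[0])  # {'trade_date': '2026-01-02', 'ticker': 'AAPL', 'qty': 100}
```

In a real pipeline the same normalize-then-type pattern would feed a warehouse staging table, with dbt handling the staging → intermediate → mart transformations downstream.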
Required Skills (Must-Have)
  • Strong hands-on experience as a Data Engineer / Analytics Engineer
  • Excellent Python skills for data processing and pipeline development
  • Advanced SQL skills (CTEs, window functions, query optimization)
  • Real-world experience with dbt (models, tests, YAML, transformations)
  • Experience with cloud data warehouses: Snowflake, Databricks, Redshift, or Azure Synapse
  • Experience building ETL / ELT pipelines in cloud environments
  • Ability to debug and fix broken or unreliable data pipelines
  • Experience working with large, messy, real-world datasets
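
The data-quality and validation side of the role (the kind of checks dbt tests such as `not_null` and `unique` automate) can be illustrated with a small hand-rolled sketch. Column names and sample rows are hypothetical:

```python
def check_not_null(rows: list[dict], column: str) -> list[dict]:
    """Return rows where the column is missing or empty (a not_null test)."""
    return [r for r in rows if r.get(column) in (None, "")]

def check_unique(rows: list[dict], column: str) -> list[str]:
    """Return values of the column that appear more than once (a unique test)."""
    seen, dupes = set(), set()
    for r in rows:
        value = r[column]
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

sample = [
    {"trade_id": "T1", "qty": 100},
    {"trade_id": "T2", "qty": None},   # fails the not_null check
    {"trade_id": "T1", "qty": 50},     # duplicate trade_id
]

print(check_not_null(sample, "qty"))     # [{'trade_id': 'T2', 'qty': None}]
print(check_unique(sample, "trade_id"))  # ['T1']
```

In practice these assertions would live as dbt schema tests or in a monitoring job, failing the pipeline run before bad data reaches the marts.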
Good to Have
  • Spark / PySpark experience
  • Airflow or workflow orchestration tools
  • Experience with financial, payments, or time-series data
  • Knowledge of data quality frameworks and testing
  • Experience working with remote or global teams
What This Role Is NOT
  • Not a Power BI / Tableau / Looker-only role
  • Not a low-code or drag-and-drop ETL role
  • Not a reporting-focused analytics role

This is a backend data engineering & data warehouse role.

Why Join Us
  • Work on high-impact data platforms used by global hedge funds
  • Strong engineering culture focused on clean code and ownership
  • Competitive compensation aligned with top Hyderabad market talent
  • Remote-friendly with flexible work options
  • Opportunity to grow into Lead Data Engineer / Architect roles
How to Apply

Apply directly on LinkedIn or message us if you have experience in Python-based data pipelines, dbt modeling, and cloud data warehousing.
